Researchers Want Guardrails to Help Prevent Bias in AI

#artificialintelligence

Artificial intelligence has given us algorithms capable of recognizing faces, diagnosing disease, and, of course, crushing computer games. But even the smartest algorithms can sometimes behave in unexpected and unwanted ways, for example by picking up gender bias from the text or images they are fed. A new framework for building AI programs suggests a way to prevent aberrant behavior in machine learning by specifying guardrails in the code from the outset. It aims to be particularly useful for non-experts deploying AI, an increasingly common situation as the technology moves out of research labs and into the real world. The approach is one of several proposed in recent years for curbing the worst tendencies of AI programs.


IBM Debuts Tools to Help Prevent Bias In Artificial Intelligence

#artificialintelligence

IBM wants to help companies reduce the chances that their artificial intelligence technologies unintentionally discriminate against certain groups, such as women and minorities. The technology giant's tool, announced on Wednesday, can inspect AI-powered software for unintentional bias when it makes decisions, such as when a loan is denied to a particular person, explained Ruchir Puri, the chief technology officer and chief architect of IBM Watson. The technology industry is increasingly combating the problem of bias in machine learning systems, which power software that can automatically recognize objects in pictures or translate languages. A number of companies have suffered a public relations black eye when their technologies failed to work as well for minority groups as for white users. For instance, researchers discovered that Microsoft's and IBM's facial-recognition technology could more accurately identify the faces of lighter-skinned males than darker-skinned females.